[dev] Guidelines
张泊明518370910136 edited this page 2025-05-14 05:34:42 +08:00

General guidelines

Goals

The software should not be restricted to any course setup (language, features). Everything should be configurable using a simple text file. From a security point of view, runs are isolated and no "communication" between tasks/tests should be possible.

Common usage:

  • Features:

    • compilation
    • code quality checks
    • OJ
    • competition ranking
    • gitea interactions (post results to issues (eg. OJ scores) or repos (eg. score board))
  • Languages:

    • matlab/octave (101/151)
    • C/C++ (101/151/280/281)
    • rust (475)
    • latex/ocaml (477)
    • python (471)

New features and languages should be very easy to add (modular architecture).

Code expectations

Languages

Mostly golang for the software and TOML for the config files. Python/shell could potentially be used as glue.
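As a rough sketch of what "everything configurable in a simple text file" could look like, here is a hypothetical TOML fragment (every key and section name below is illustrative, not a fixed schema):

```toml
# Hypothetical course configuration; all key names are illustrative.
[course]
name = "engr151"
language = "cpp"

[checks.file-length]
max = 500          # hard limit
recommended = 200
score = -5         # penalty when a file exceeds max

[checks.clang-tidy]
catch = ["unchecked_malloc_result", "global_variables"]
score = -10        # penalty per caught issue

[oj.task1]
mem-limit = "256M"
cpu-limit = "1s"

[[oj.task1.cases]]
input = "case1.in"
weight = 1         # easy test case

[[oj.task1.cases]]
input = "case2.in"
weight = 10        # harder test case
```

Adding a new language or check would then only mean adding a new section, with no code changes in the framework itself.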

Software architecture

Various components share similar requirements:

  • OJ and CQ generate a score (computed differently)
  • CQ and OJ run various tests, eg.
    • CQ:
      • check file-length, clang-tidy, cppcheck, cpplint
      • for each of the checks, define what to "catch" (eg. max/recommended file-length, `unchecked_malloc_result`, `global_variables` for clang-tidy, etc.)
      • assign a (negative) score for each failed check based on some weight (config file), eg. -5 for a file-length larger than max and -10 for a global variable
    • OJ:
      • each task features different mem/cpu limitations
      • for each task run different test cases
      • assign a score for each task based on some weight, eg. 1 for an easy test case and 10 for a harder one
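The two scoring schemes above differ only in sign convention: CQ charges failed checks, OJ rewards passed cases. A minimal Go sketch (the `CheckResult` type and `Score` function are hypothetical, not part of any existing codebase):

```go
package main

import "fmt"

// CheckResult is a hypothetical record of one finished check or test case.
type CheckResult struct {
	Name   string
	Passed bool
	Weight int // negative for CQ penalties, positive for OJ cases
}

// Score sums the weights of the results that "count":
// with penalize=true (CQ), failed checks subtract;
// with penalize=false (OJ), passed cases add.
func Score(results []CheckResult, penalize bool) int {
	total := 0
	for _, r := range results {
		if penalize && !r.Passed { // CQ: charge failed checks
			total += r.Weight
		} else if !penalize && r.Passed { // OJ: reward passed cases
			total += r.Weight
		}
	}
	return total
}

func main() {
	cq := []CheckResult{
		{"file-length", false, -5},
		{"global_variables", false, -10},
		{"cppcheck", true, -5},
	}
	oj := []CheckResult{
		{"easy-case", true, 1},
		{"hard-case", false, 10},
	}
	fmt.Println(Score(cq, true))  // -15
	fmt.Println(Score(oj, false)) // 1
}
```

Keeping one result type for both kinds of run is exactly what lets the framework treat the OJ as "yet another test".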

Due to the very similar structure of these requirements, we want to implement a general framework where the OJ is "yet another test". The specifics of each test should be implemented as "modules".

Ideally the framework should be "test-agnostic", ie. independent from the specific test (clang-tidy, cppcheck, OJ, etc.).
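In Go, "test-agnostic" naturally maps to an interface that every module implements, so the driver never names a concrete check. A hedged sketch (the `Test` interface, `Result` type, and `fileLength` module are all illustrative):

```go
package main

import "fmt"

// Result is what every module reports back to the framework.
type Result struct {
	Name  string
	Score int
}

// Test is the hypothetical module interface: clang-tidy, cppcheck,
// the OJ, etc. would each implement it, so the framework only ever
// deals with this type.
type Test interface {
	Name() string
	Run() Result
}

// fileLength is one illustrative module: penalize files over a limit.
type fileLength struct{ max, actual, penalty int }

func (f fileLength) Name() string { return "file-length" }
func (f fileLength) Run() Result {
	score := 0
	if f.actual > f.max {
		score = f.penalty
	}
	return Result{f.Name(), score}
}

// RunAll is the test-agnostic driver: it neither knows nor cares
// which modules it was given.
func RunAll(tests []Test) (total int) {
	for _, t := range tests {
		r := t.Run()
		fmt.Printf("%s: %d\n", r.Name, r.Score)
		total += r.Score
	}
	return total
}

func main() {
	tests := []Test{fileLength{max: 500, actual: 800, penalty: -5}}
	fmt.Println("total:", RunAll(tests))
}
```

New checks or a whole new language backend would then just be new implementations of `Test`, registered from the config file.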

UI

Interactions should occur through commit messages. Drone handles the connection with gitea. On a commit:

  1. Mandatory: check repo health (fail, return an error, and exit if anything goes wrong)
  2. Optional: compilation (based on config file)
  3. Optional: code quality (based on config file)
  4. Optional: OJ tests (based on commit message)
  5. Optional: compute scores
  6. Optional: send feedback based on commit message (eg. update grading sheet on a "correct" release tag or on a "grading: assignment" commit message and send a report as a gitea issue)
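Since steps 4-6 are driven by the commit message, the runner needs a small trigger parser. A minimal sketch, assuming hypothetical trigger phrases ("oj", "grading: <assignment>") that would really come from the course config:

```go
package main

import (
	"fmt"
	"strings"
)

// Actions derives the optional pipeline steps from a commit message.
// The trigger words used here are illustrative, not a fixed protocol.
func Actions(msg string) (runOJ, sendFeedback bool, assignment string) {
	lower := strings.ToLower(msg)
	// Match "oj" as a whole word to avoid false hits (eg. "project").
	for _, w := range strings.Fields(lower) {
		if w == "oj" {
			runOJ = true
		}
	}
	// "grading: h3" requests feedback for assignment "h3".
	if i := strings.Index(lower, "grading:"); i >= 0 {
		sendFeedback = true
		assignment = strings.TrimSpace(msg[i+len("grading:"):])
	}
	return
}

func main() {
	fmt.Println(Actions("fix bug, run oj"))
	fmt.Println(Actions("grading: h3"))
}
```

Keeping this parsing in one place means the Drone pipeline itself stays dumb: it always runs the mandatory steps and asks `Actions` which optional ones to add.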

Notes

  • Drone should run through ssh: docker would assume a "secure" cluster (which we don't have with runners on SJTU servers).